
    Exploring the synergistic potential of quantum annealing and gate model computing for portfolio optimization

    Full text link
    Portfolio optimization is one of the most studied problems for demonstrating the near-term applications of quantum computing. However, large-scale problems cannot be solved on today's quantum hardware. In this work, we build upon a study that uses the best of both quantum annealing and gate-based quantum computing systems to solve large-scale optimization problems efficiently on the available hardware. The existing work uses a method called Large System Sampling Approximation (LSSA), which divides the large problem into several smaller problems and then combines the multiple solutions to approximate the solution to the original problem. This paper introduces a novel technique to modify the sampling step of LSSA. We divide the portfolio optimization problem into sub-systems of smaller sizes by selecting a diverse set of assets that act as representatives of the entire market and capture the highest correlations among assets. We conduct tests on real-world stock data from the Indian stock market on up to 64 assets. Our experiments show that the hybrid approach performs on par with traditional classical optimization methods and achieves a good approximation ratio. We also demonstrate the effectiveness of our approach on a range of portfolio optimization problems of different sizes, examine the effects of different parameters on the proposed method, and compare its performance with the earlier work. Our findings suggest that hybrid annealer-gate quantum computing can be a valuable tool for portfolio managers seeking to optimize their investment portfolios in the near future.
    Comment: 12 pages, 4 figures, 1 table
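    As a rough illustration of the decomposition step described above, the sketch below groups assets into sub-systems around high-correlation representatives. The selection rule, function names, and random stand-in data are assumptions for illustration only, not the paper's exact LSSA procedure.

```python
# Hypothetical sketch: split a portfolio-optimization problem into smaller
# sub-systems by picking "representative" assets with the highest average
# absolute correlation, then grouping the remaining assets around them.
# The selection rule here is an assumption, not the paper's exact method.
import numpy as np
import pandas as pd


def build_subsystems(returns: pd.DataFrame, n_subsystems: int) -> dict[str, list[str]]:
    corr = returns.corr().abs()
    # Score each asset by how strongly it correlates with the rest of the market.
    scores = (corr.sum(axis=1) - 1.0) / (len(corr) - 1)
    representatives = scores.nlargest(n_subsystems).index.tolist()

    # Assign every remaining asset to the representative it correlates with most.
    groups = {rep: [rep] for rep in representatives}
    for asset in corr.index:
        if asset in representatives:
            continue
        best_rep = corr.loc[asset, representatives].idxmax()
        groups[best_rep].append(asset)
    return groups


# Random data standing in for a 64-asset universe (e.g. Indian stocks).
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(size=(250, 64)),
                       columns=[f"STOCK_{i}" for i in range(64)])
for rep, members in build_subsystems(returns, n_subsystems=4).items():
    print(rep, len(members))
```

    Each sub-system would then be small enough to solve on the available annealing or gate-based hardware before the partial solutions are recombined.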

    Campaign Ad - Betty Sutton

    Get PDF
    Campaign ad for Political Science class - PSCI 217 - Media and Politics

    PlantDoc: A Dataset for Visual Plant Disease Detection

    Full text link
    India loses 35% of its annual crop yield to plant diseases. Early detection of plant diseases remains difficult due to the lack of lab infrastructure and expertise. In this paper, we explore the possibility of computer vision approaches for scalable and early plant disease detection. The lack of a sufficiently large-scale non-lab dataset remains a major challenge for enabling vision-based plant disease detection. Against this background, we present PlantDoc: a dataset for visual plant disease detection. Our dataset contains 2,598 data points in total across 13 plant species and up to 17 classes of diseases, involving approximately 300 human hours of effort in annotating internet-scraped images. To show the efficacy of our dataset, we learn 3 models for the task of plant disease classification. Our results show that modelling using our dataset can increase the classification accuracy by up to 31%. We believe that our dataset can help reduce the entry barrier of computer vision techniques in plant disease detection.
    Comment: 5 pages, 6 figures, 3 tables
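    For context, a minimal transfer-learning sketch of the kind of classification experiment the abstract describes, assuming a folder-per-class image layout. The model choice, dataset path, and hyperparameters are assumptions, not the authors' exact training setup.

```python
# Hypothetical sketch of fine-tuning a pretrained CNN on a folder-per-class
# plant-disease dataset such as PlantDoc; paths and hyperparameters are
# assumptions, not the paper's reported configuration.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("PlantDoc/train", transform=tfm)  # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # one output per disease class

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # single epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```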

    Sampling Semantic Data Stream: Resolving Overload and Limited Storage Issues

    Get PDF
    Semantic Web technologies are being increasingly used for exploiting relations between data. In addition, real-time systems such as social networks, sensors, cameras and weather services are continuously generating data. As a result, the data and the links between them are becoming extremely vast. Such a huge quantity of data needs to be analyzed, processed, and, if necessary, stored. In this paper, we propose sampling operators that allow us to drop RDF triples from the incoming data, thereby reducing the load on existing engines such as CQELS and C-SPARQL, which are able to deal with big and linked data. Hence, the processing effort, time, and required storage space are reduced remarkably. We propose Uniform Random Sampling, Reservoir Sampling and Chain Sampling operators, which may be implemented depending on the application.
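    As an illustration of one of the proposed operators, here is a minimal reservoir-sampling sketch over a stream of RDF triples. The triple representation and the synthetic stream are assumptions for illustration and are independent of engines such as CQELS or C-SPARQL.

```python
# Minimal reservoir-sampling sketch for an RDF triple stream: keep a uniform
# random sample of k triples from a stream whose length is not known upfront.
import random
from typing import Iterable, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)


def reservoir_sample(stream: Iterable[Triple], k: int, seed: int = 0) -> list[Triple]:
    rng = random.Random(seed)
    reservoir: list[Triple] = []
    for i, triple in enumerate(stream):
        if i < k:
            reservoir.append(triple)       # fill the reservoir first
        else:
            j = rng.randint(0, i)          # each triple survives with probability k/(i+1)
            if j < k:
                reservoir[j] = triple
    return reservoir


# Synthetic sensor-reading triples standing in for a real RDF stream.
stream = ((f"ex:sensor{i}", "ex:hasReading", str(20 + i % 5)) for i in range(10_000))
print(reservoir_sample(stream, k=5))
```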

    Revisiting Prompt Engineering via Declarative Crowdsourcing

    Full text link
    Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but they are brittle and error-prone. Toolkits and recipes have emerged around so-called prompt engineering: the process of asking an LLM to do something via a series of prompts. However, for LLM-powered data processing workflows in particular, optimizing for quality while keeping cost bounded is a tedious, manual process. We put forth a vision for declarative prompt engineering. We view LLMs like crowd workers and leverage ideas from the declarative crowdsourcing literature, including leveraging multiple prompting strategies, ensuring internal consistency, and exploring hybrid LLM/non-LLM approaches, to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach.
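    As a rough sketch of the "multiple prompting strategies with internal consistency" idea applied to entity resolution, the snippet below majority-votes over several prompt phrasings. The function call_llm is a hypothetical placeholder for any LLM client, and the prompt templates are illustrative, not taken from the paper.

```python
# Hypothetical sketch: ask several differently-phrased prompts for an
# entity-resolution judgement and take a majority vote across them.
from collections import Counter


def call_llm(prompt: str) -> str:
    # Placeholder: plug in whatever LLM client you use.
    raise NotImplementedError("connect an LLM client here")


PROMPTS = [
    "Do these two records refer to the same entity? Answer yes or no.\n{a}\n{b}",
    "Record A: {a}\nRecord B: {b}\nSame real-world entity? yes/no:",
    "Are '{a}' and '{b}' duplicates of each other? Reply yes or no.",
]


def resolve(a: str, b: str) -> bool:
    votes = []
    for template in PROMPTS:
        answer = call_llm(template.format(a=a, b=b)).strip().lower()
        votes.append(answer.startswith("yes"))
    # Internal consistency via majority vote across prompting strategies.
    return Counter(votes).most_common(1)[0][0]
```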

    StaticFixer: From Static Analysis to Static Repair

    Full text link
    Static analysis tools are traditionally used to detect and flag programs that violate properties. We show that static analysis tools can also be used to perturb programs that satisfy a property to construct variants that violate the property. Using this insight, we can construct paired datasets of unsafe-safe program pairs and learn strategies to automatically repair property violations. We present a system called StaticFixer, which automatically repairs information flow vulnerabilities using this approach. Since information flow properties are non-local (both to check and to repair), StaticFixer also introduces a novel domain-specific language (DSL) and strategy-learning algorithms for synthesizing non-local repairs. We use StaticFixer to synthesize strategies for repairing two types of information flow vulnerabilities, unvalidated dynamic calls and cross-site scripting, and show that StaticFixer successfully repairs several hundred vulnerabilities from open-source JavaScript repositories, outperforming neural baselines built using CodeT5 and Codex. Our datasets can be downloaded from http://aka.ms/StaticFixer.
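    As a rough illustration of the paired-data idea (not StaticFixer's actual analysis or DSL), the sketch below strips a sanitizer call from a safe snippet to synthesize a property-violating variant, yielding an (unsafe, safe) training pair. The sanitizer name and regex are hypothetical.

```python
# Hypothetical sketch: perturb a safe snippet by removing a sanitizer call so
# the resulting (unsafe, safe) pair can be used to learn repair strategies.
import re

SAFE_SNIPPET = 'element.innerHTML = escapeHtml(userInput);'


def make_unsafe(safe_code: str, sanitizer: str = "escapeHtml") -> str:
    """Remove a wrapping sanitizer call to create a property-violating variant."""
    return re.sub(rf"{sanitizer}\((.*?)\)", r"\1", safe_code)


unsafe = make_unsafe(SAFE_SNIPPET)
pair = (unsafe, SAFE_SNIPPET)   # training example: (violating program, repaired program)
print(pair)
# ('element.innerHTML = userInput;', 'element.innerHTML = escapeHtml(userInput);')
```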